    Distributed computing in the LHC era

    A large, worldwide distributed scientific community is intensively running physics analyses on the first data collected at the LHC. To prepare for this unprecedented computing challenge, the four LHC experiments have developed distributed computing models capable of serving, processing and archiving the large number of events produced by data taking, amounting to about 15 petabytes per year. The experiments' workflows for event reconstruction from raw data, production of simulated events and physics analysis on skimmed data generate hundreds of thousands of jobs per day, running on a complex distributed computing fabric. All this is possible thanks to reliable Grid services, which have been developed, deployed at the needed scale and thoroughly tested by the WLCG Collaboration over the last ten years. To provide a concrete example, this paper concentrates on the CMS computing model and the CMS experience with the first LHC data.

    A new development cycle of the Statistical Toolkit

    The Statistical Toolkit is an open-source system specialized in the statistical comparison of distributions. It addresses requirements common to different experimental domains, such as simulation validation (e.g. comparison of experimental and simulated distributions), regression testing in the course of the software development process, and detector performance monitoring. Various sets of statistical tests have been added to the existing collection to deal with the one-sample problem (i.e. the comparison of a data distribution to a function, including tests for normality, categorical analysis and the estimate of randomness). Improved algorithms and software design contribute to the robustness of the results. A simple user layer dealing with primitive data types facilitates the use of the toolkit both in standalone analyses and in large-scale experiments. Comment: To be published in the Proc. of CHEP (Computing in High Energy Physics) 201
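
    As an illustration of the kind of one-sample comparisons described above (a data distribution tested against a reference function, including normality tests), the following Python sketch performs the same class of tests with SciPy. It is not the Statistical Toolkit's own API; the sample and the choice of tests are assumptions made purely for illustration.

        # Illustrative sketch only (not the Statistical Toolkit's API): one-sample
        # goodness-of-fit tests of the kind listed in the abstract, done with SciPy.
        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(42)
        sample = rng.normal(loc=0.0, scale=1.0, size=500)  # stand-in for an observed distribution

        # One-sample Kolmogorov-Smirnov test: compare the empirical distribution
        # to a reference cumulative distribution function (here a standard normal).
        ks_stat, ks_p = stats.kstest(sample, "norm")

        # Shapiro-Wilk test for normality of the same sample.
        sw_stat, sw_p = stats.shapiro(sample)

        print(f"KS vs N(0,1): D={ks_stat:.3f}, p={ks_p:.3f}")
        print(f"Shapiro-Wilk: W={sw_stat:.3f}, p={sw_p:.3f}")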

    The computing of the LHC experiments

    The LHC experiments have thousands of collaborators distributed worldwide who expect to run their physics analyses on the collected data. Each of the four experiments will run hundreds of thousands of jobs per day, including event reconstruction from raw data, analysis on skimmed data, and production of simulated events. At the same time, tens of petabytes of data will have to be easily available on a complex distributed computing fabric for a period of at least ten years. These challenging goals have prompted the development and deployment of reliable Grid services, which have been thoroughly tested and brought to the needed scale over the last few years. This paper concentrates on CMS computing needs for data taking at the LHC and highlights the INFN-Grid contribution to the effort.

    Integrated Depths for Partially Observed Functional Data

    Partially observed functional data are frequently encountered in applications and are the object of increasing interest in the literature. Here we address the problem of measuring the centrality of a datum in a partially observed functional sample. We propose an integrated functional depth for partially observed functional data, dealing with the very challenging case where partial observability can occur systematically for any observation in the functional dataset. In particular, unlike many techniques for partially observed functional data, we do not require that any functional datum be fully observed, nor that a common domain exist on which all of the functional data are recorded. Because of this, our proposal can also be used in those frequent situations where reconstruction methods and other techniques for partially observed functional data are inapplicable. By means of simulation studies, we demonstrate the good finite-sample performance of the proposed depth. Our proposal enables the use of benchmark methods based on depths, originally introduced for fully observed data, in the case of partially observed functional data. This includes the functional boxplot, the outliergram and the depth-versus-depth classifiers. We illustrate our proposal on two case studies, the first concerning a problem of outlier detection in German electricity supply functions, the second regarding a classification problem with data obtained from medical imaging. Supplementary materials for this article are available online.
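
    The following Python sketch conveys the general idea of an integrated depth computed only over the portion of the domain where each curve is observed. It is a simplified stand-in, not the authors' estimator; the grid, the pointwise univariate depth and the toy data are all assumptions made for illustration.

        # Minimal sketch of an integrated depth restricted to the observed part of
        # each curve (not the paper's exact estimator). Curves are stored row-wise
        # on a common grid, with NaN marking unobserved points.
        import numpy as np

        def partial_integrated_depth(curves):
            """curves: (n_curves, n_grid) array, NaN where a curve is not observed."""
            n, m = curves.shape
            depths = np.zeros(n)
            for i in range(n):
                observed = np.where(~np.isnan(curves[i]))[0]   # domain of curve i
                pointwise = []
                for t in observed:
                    col = curves[:, t]
                    col = col[~np.isnan(col)]                  # curves observed at time t
                    if col.size < 2:
                        continue
                    below = np.mean(col <= curves[i, t])       # univariate depth at t
                    above = np.mean(col >= curves[i, t])
                    pointwise.append(2.0 * below * above)
                depths[i] = np.mean(pointwise) if pointwise else 0.0
            return depths

        # Toy example: 20 curves on a 50-point grid, each missing a random sub-interval.
        rng = np.random.default_rng(0)
        grid = np.linspace(0, 1, 50)
        data = np.sin(2 * np.pi * grid) + rng.normal(0, 0.2, size=(20, 50))
        for row in data:
            a, b = np.sort(rng.integers(0, 50, size=2))
            row[a:b] = np.nan
        print(partial_integrated_depth(data))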

    An electromagnetic shashlik calorimeter with longitudinal segmentation

    A novel technique for longitudinal segmentation of shashlik calorimeters has been tested in the CERN West Area beam facility. A 25-tower, very fine sampling e.m. calorimeter has been built with vacuum photodiodes inserted in the first 8 radiation lengths to sample the initial development of the shower. Results concerning energy resolution, impact point reconstruction and electron/pion separation are reported. Comment: 13 pages, 12 figures

    Mining discharge letters for diagnoses validation and quality assessment

    We present two projects where text mining techniques are applied to free-text documents written by clinicians. In the first, text mining is applied to discharge letters related to patients with diagnoses of acute myocardial infarction (by ICD9CM coding). The aim is to extract information on diagnoses in order to validate them and to integrate administrative databases. In the second, text mining is applied to discharge letters related to patients who received a diagnosis of heart failure (by ICD9CM coding). The aim is to assess the presence of follow-up instructions from doctors to patients, as an aspect of information continuity and of the continuity and quality of care. Results show that text mining is a promising tool both for diagnosis validation and for quality-of-care assessment.
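
    As a rough illustration of the second task (detecting follow-up instructions in free text), the Python sketch below applies a simple rule-based pass over discharge letters. The patterns and sample texts are invented for illustration and do not reflect the actual vocabulary, language or text mining pipeline used in the projects above.

        # Hedged sketch: rule-based flagging of follow-up instructions in discharge
        # letters. Patterns and sample letters are illustrative only.
        import re

        FOLLOW_UP_PATTERNS = [
            r"\bfollow[- ]?up\b",
            r"\bcontrol visit\b",
            r"\bcardiology (appointment|review)\b",
            r"\brepeat (echocardiogram|blood tests?)\b",
        ]

        def has_follow_up_instructions(letter: str) -> bool:
            """Return True if the letter matches any follow-up pattern."""
            return any(re.search(p, letter, flags=re.IGNORECASE) for p in FOLLOW_UP_PATTERNS)

        letters = [
            "Discharged in stable condition. Follow-up with cardiology in 4 weeks.",
            "Patient discharged home. Therapy as per the attached prescription.",
        ]
        for text in letters:
            print(has_follow_up_instructions(text), "-", text)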

    A Depth for Censored Functional Data

    Censored functional data are becoming more common in applications. In such cases, existing depth measures cannot be used. In this paper, an approach for measuring depths of censored functional data is presented. Its performance for finite samples is tested by simulation, showing that the new depth agrees with an integrated depth for uncensored functional data. Antonio ElĂ­as is supported by the Spanish Ministerio de EducaciĂłn, Cultura y Deporte under grant FPU15/00625. Antonio ElĂ­as and RaĂșl JimĂ©nez are partially supported by the Spanish Ministerio de EconomĂ­a y Competitividad under grant ECO2015-66593-P.

    The Impact of the Level of Feed-On-Offer Available to Merino Ewes During Winter-Spring on the Wool Production of Their Progeny as Adults

    New opportunities for developing optimum ewe management systems, based on achieving liveweight and body condition score (CS) targets at critical stages of the reproductive cycle, have emerged from the acceptance that nutrition during pregnancy can have substantial impacts on the lifetime wool performance of the progeny (Kelly et al., 1996). However, most studies of the impacts of nutrition on foetal growth and development have tended to focus on late pregnancy and have considered only extreme nutritional regimes, often outside the boundaries of commercial reality. Hence, the Lifetime Wool team (Thompson & Oldham, 2004) conducted dose-response experiments to determine the levels of feed-on-offer (FOO; kg dry matter/ha; Hyder et al., 2004) needed at different stages of the reproductive cycle to optimise both wool and meat production per ha in the short term and the lifetime performance of the progeny in the long term. This paper reports the response, in the first two years of the experiment, of clean fleece weight (CFW) and fibre diameter (FD) of the progeny as adults to the level of FOO available to their mothers in late pregnancy and lactation.